In the last post, we developed an intuition for matrices. We found that they are just compact representations of linear maps and that adding and multiplying matrices are just ways of combining the underlying linear maps.
In this post, we're going to dive deeper into the world of linear algebra and cover eigenvectors. Eigenvectors are central to linear algebra and help us understand many interesting properties of linear maps, including:
The effect of applying the linear map repeatedly on an input.
How the linear map rotates the space. In fact, eigenvectors were first derived to study the axis of rotation of planets!
Eigenvectors helped early mathematicians study how the planets rotate. Image Source: Wikipedia.
For a more modern example, eigenvectors are at the heart of one of the most important algorithms of all time - the original PageRank algorithm that powers Google Search.
Our Goals
In this post we're going to try and derive eigenvectors ourselves. To really create a strong motivation, we're going to explore basis vectors, matrices in different bases, and matrix diagonalization. So hang in there and wait for the big reveal - I promise it will be really exciting when it all comes together!
Everything we'll be doing is going to be in the 2D space $\mathbb{R}^2$ - the standard coordinate plane over the real numbers that you're probably already used to.
Basis Vectors
We saw in the last post how we can derive the matrix for a given linear map $f$:
$f(x)$ (as we defined it in the previous post) can be represented by the notation
$$F = \begin{bmatrix} \textcolor{blue}{3} & \textcolor{#228B22}{0} \\ \textcolor{blue}{0} & \textcolor{#228B22}{5} \end{bmatrix}$$
This is extremely cool - we can describe the entire function, and how it operates on an infinite number of points, with a little table of just 4 values.
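To make that concrete, here's a quick sketch in Python/NumPy (my own illustration - the original post doesn't use code). It assumes $f$ is the map represented by the matrix above, i.e. it scales the $x$ coordinate by 3 and the $y$ coordinate by 5.

```python
import numpy as np

# The matrix representing f: it scales x by 3 and y by 5.
F = np.array([[3, 0],
              [0, 5]])

# Applying f to a vector is just matrix-vector multiplication.
v = np.array([2, 1])
print(F @ v)                  # [ 6  5] -> x scaled by 3, y scaled by 5

# The columns of F are exactly f applied to the basis vectors.
print(F @ np.array([1, 0]))   # [3 0] - the first column
print(F @ np.array([0, 1]))   # [0 5] - the second column
```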
But why did we choose $\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$ and $\textcolor{#228B22}{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}$ to define the columns of the matrix? Why not some other pair like $\textcolor{blue}{\begin{bmatrix} 3 \\ 3 \end{bmatrix}}$ and $\textcolor{#228B22}{\begin{bmatrix} 0 \\ 0 \end{bmatrix}}$?
Intuitively, we think of $\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$ and $\textcolor{#228B22}{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}$ as units that we can use to create other vectors. In fact, we can break down every vector in $\mathbb{R}^2$ into some combination of these two vectors.
We can reach any point in the coordinate plane by combining our two vectors.
More formally, when two vectors can be combined in different ways to create all other vectors in $\mathbb{R}^2$, we say that those vectors span the space. The minimum number of vectors you need to span $\mathbb{R}^2$ is 2. So when we have 2 vectors that span $\mathbb{R}^2$, we call those vectors a basis.
$\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$ and $\textcolor{#228B22}{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}$ are basis vectors for $\mathbb{R}^2$.
You can think of basis vectors as the minimal building blocks for the space. We can combine them in different amounts to reach all vectors we could care about.
We can think of basis vectors as the building blocks of the space - we can combine them to create all possible vectors in the space. Image Source: instructables.com.
Other Basis Vectors for $\mathbb{R}^2$
Now are there other pairs of vectors that also form a basis for $\mathbb{R}^2$?
Let's start with an example that definitely won't work.
Can you combine these vectors to create $\begin{bmatrix} 2 \\ 3 \end{bmatrix}$? Clearly you can't - we don't have any way to move in the $y$ direction.
No combination of these two vectors could possibly get us the vector $P$.
Good Example
What about $\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$ and $\textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}$?
Our new basis vectors.
Surprisingly, you can! The below image shows how we can reach our previously unreachable point $P$.
Note we can combine $3$ units of $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $-1$ units of $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ to get the vector $P$.
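If you'd like to check this (or find the coefficients for any other point), it boils down to solving a small linear system. Here's a minimal NumPy sketch of my own, not from the original post:

```python
import numpy as np

# Columns are the candidate basis vectors [1,0] and [1,1].
basis = np.array([[1, 1],
                  [0, 1]])

# Solve basis @ coeffs = P for the point P = (2, 3).
P = np.array([2, 3])
coeffs = np.linalg.solve(basis, P)
print(coeffs)   # [-1.  3.]  ->  -1 * [1,0] + 3 * [1,1] = [2,3]
```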
I'll leave a simple proof of this as an appendix at the end of this post so we can keep moving - but it's not too complicated so if you're up for it, give it a go! The main thing we've learned here is that:
There are multiple valid bases for $\mathbb{R}^2$.
Bases as New Coordinate Axes
In many ways, choosing a new basis is like choosing a new set of axes for the coordinate plane. When we switch our basis to, say, $B = \{\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}, \textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}\}$, our axes just rotate as shown below:
As our second basis vector changed from $\textcolor{#228B22}{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}$ to $\textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}$, our $y$ axis rotates to be in line with $\textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}$.
As a result of this, the same notation for a vector means different things in different bases.
In the original basis, $\begin{bmatrix} \textcolor{blue}{3} \\ \textcolor{#228B22}{4} \end{bmatrix}$ meant:
The vector you get when you compute $\textcolor{blue}{3 \cdot \begin{bmatrix} 1 \\ 0 \end{bmatrix}} + \textcolor{#228B22}{4 \cdot \begin{bmatrix} 0 \\ 1 \end{bmatrix}}$.
Or just $\textcolor{blue}{3} \cdot$ the first basis vector plus $\textcolor{#228B22}{4} \cdot$ the second basis vector.
In our usual notation, $\begin{bmatrix} \textcolor{blue}{3} \\ \textcolor{#228B22}{4} \end{bmatrix}$ means $3$ units of $\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$ and $4$ units of $\textcolor{#228B22}{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}$.
Now when we use a different basis, the meaning of this notation actually changes.
For the basis $B = \{\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}, \textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}\}$, the vector $\begin{bmatrix} \textcolor{blue}{3} \\ \textcolor{#228B22}{4} \end{bmatrix}_{B}$ means:
The vector you get from: $\textcolor{blue}{3 \cdot \begin{bmatrix} 1 \\ 0 \end{bmatrix}} + \textcolor{#228B22}{4 \cdot \begin{bmatrix} 1 \\ 1 \end{bmatrix}}$.
You can see this change below:
In the notation of basis $B$, $\begin{bmatrix} \textcolor{blue}{3} \\ \textcolor{#228B22}{4} \end{bmatrix}_{B}$ means $3$ units of $\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$ and $4$ units of $\textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}$, giving us the point $P_{B}$.
By changing the underlying axes, we changed the location of $P$ even though it's still called $(3, 4)$. You can see this below:
The point $P$ also changes position when we change the basis. It is still $3$ parts first basis vector and $4$ parts second basis vector, but since the underlying basis vectors have changed, the point they describe changes too.
So the vectors $\begin{bmatrix} \textcolor{blue}{3} \\ \textcolor{#228B22}{4} \end{bmatrix}$ and $\begin{bmatrix} \textcolor{blue}{3} \\ \textcolor{#228B22}{4} \end{bmatrix}_{B}$ refer to different actual vectors, depending on the basis we're using.
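Here's a tiny sketch of this bookkeeping in NumPy (again, my own illustration): converting coordinates written in basis $B$ back into ordinary standard-basis coordinates is just a matrix-vector product with the basis vectors as columns.

```python
import numpy as np

# Basis B, with its basis vectors as columns: [1,0] and [1,1].
B = np.array([[1, 1],
              [0, 1]])

# The coordinates [3, 4] written in basis B...
coords_in_B = np.array([3, 4])

# ...describe this point in standard coordinates:
print(B @ coords_in_B)   # [7 4]  = 3*[1,0] + 4*[1,1]

# Whereas [3, 4] in the standard basis is just the point (3, 4).
```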
Matrix Notation Based on Bases
Similarly, the same notation also means different things for matrices depending on the basis. Earlier, the matrix $F$ for the function $f$ was represented by:

$$F = \begin{bmatrix} \textcolor{blue}{3} & \textcolor{#228B22}{0} \\ \textcolor{blue}{0} & \textcolor{#228B22}{5} \end{bmatrix}$$
When I use the basis $B = \{\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}, \textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}\}$, the matrix $F_{B}$ in basis $B$ becomes:

$$F_B = \begin{bmatrix} f(\textcolor{blue}{b_1})_{B} & f(\textcolor{#228B22}{b_2})_{B} \end{bmatrix}$$

where $\textcolor{blue}{b_1}$ and $\textcolor{#228B22}{b_2}$ are the basis vectors of $B$, and each column is written in the notation of basis $B$.
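As a concrete check, here's a short NumPy sketch of what this change of basis does to our matrix $F$ from above. It leans on the standard change-of-basis recipe $F_B = P^{-1} F P$, where $P$ has the basis vectors of $B$ as its columns - that formula isn't derived in this post, so treat it as an assumption here.

```python
import numpy as np

F = np.array([[3, 0],
              [0, 5]])     # the matrix for f in the standard basis

P = np.array([[1, 1],
              [0, 1]])     # columns are the basis vectors of B: [1,0] and [1,1]

# Rewrite F in basis B: convert from B-coordinates, apply f, convert back.
F_B = np.linalg.inv(P) @ F @ P
print(F_B)
# [[ 3. -2.]
#  [ 0.  5.]]   -> not diagonal; B wasn't a special enough basis
```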
We took this short detour into notation for a very specific reason - rewriting a matrix in a different basis is actually a neat trick that allows us to reconfigure the matrix to make it easier to use. How? Let's find out with a quick example.
Let's say I have a matrix $F$ (representing a linear function) that I need to apply again and again (say 5 times) on a vector $v$.
This would be:
$F \cdot F \cdot F \cdot F \cdot F \cdot v$.
Usually, calculating this is really cumbersome.
Can you imagine doing this 5 times in a row? Yeesh. Image Source: Wikipedia.
But let's imagine for a moment that $F$ was a diagonal matrix (i.e. something like $F = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix}$). If this were the case, then this multiplication would be EASY.
Why? Let's see what $F \cdot F$ is:
$$F \cdot F = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix} \cdot \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix}$$
$$F \cdot F = \begin{bmatrix} a \cdot a + 0 \cdot 0 & a \cdot 0 + 0 \cdot b \\ 0 \cdot a + b \cdot 0 & 0 \cdot 0 + b \cdot b \end{bmatrix} = \begin{bmatrix} a^2 & 0 \\ 0 & b^2 \end{bmatrix}$$

The same pattern continues with each multiplication, so applying $F$ five times just means raising the diagonal entries to the fifth power.
This is way easier to work with!
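A quick numerical sanity check of that claim (a NumPy sketch of my own, not something from the original post, using arbitrary values $a = 2$, $b = 3$):

```python
import numpy as np

# A diagonal matrix with a = 2, b = 3 (arbitrary example values).
F = np.diag([2, 3])
v = np.array([1, 1])

# Applying F five times to v is the same as multiplying v by F^5...
print(np.linalg.matrix_power(F, 5) @ v)   # [ 32 243]

# ...and F^5 is just the diagonal entries raised to the 5th power.
print(np.diag([2**5, 3**5]) @ v)          # [ 32 243]
```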
So how can we get $F$ to be a diagonal matrix?
Which Basis Makes a Matrix Diagonal?
Earlier, we saw that choosing a new basis makes us change how we write down the matrix. So can we find a basis $B = \{b_1, b_2\}$ that converts $F$ into a diagonal matrix?
From earlier, we know that $F_{B}$, the matrix $F$ in the basis $B$, is written as:

$$F_B = \begin{bmatrix} f(\textcolor{blue}{b_1})_{B} & f(\textcolor{#228B22}{b_2})_{B} \end{bmatrix}$$
Recall our discussion on vector notation in a different basis:
Say my basis is $B = \{\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}, \textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}\}$.
Then the vector $\begin{bmatrix} \textcolor{blue}{3} \\ \textcolor{#228B22}{4} \end{bmatrix}_{B}$ means:
The vector you get when you compute: $\textcolor{blue}{3 \cdot \begin{bmatrix} 1 \\ 0 \end{bmatrix}} + \textcolor{#228B22}{4 \cdot \begin{bmatrix} 1 \\ 1 \end{bmatrix}}$.
So for $F_B$ to be diagonal, its columns need to look like $f(\textcolor{blue}{b_1})_{B} = \begin{bmatrix} \lambda_1 \\ 0 \end{bmatrix}_{B}$ and $f(\textcolor{#228B22}{b_2})_{B} = \begin{bmatrix} 0 \\ \lambda_2 \end{bmatrix}_{B}$ for some numbers $\lambda_1$ and $\lambda_2$ - which, unwinding the notation, just means $f(\textcolor{blue}{b_1}) = \lambda_1 \textcolor{blue}{b_1}$ and $f(\textcolor{#228B22}{b_2}) = \lambda_2 \textcolor{#228B22}{b_2}$. Is there a special name for vectors like $b_1$ and $b_2$ that magically let us rewrite a matrix as a diagonal one? Yes! These vectors are the eigenvectors of $f$. That's right - you derived eigenvectors all by yourself.
You the real MVP.
More formally, we define an eigenvector of $f$ as any non-zero vector $v$ such that:
$$f(v) = \lambda v$$
or
$$F \cdot v = \lambda v$$
Here $\lambda$ is just a number, called the eigenvalue associated with $v$. The basis formed by the eigenvectors is known as the eigenbasis. Once we switch to using the eigenbasis, our original problem of finding $f \circ f \circ f \circ f \circ f(v)$ becomes:

$$F_B^5 \cdot v_B = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}^5 \cdot v_B = \begin{bmatrix} \lambda_1^5 & 0 \\ 0 & \lambda_2^5 \end{bmatrix} \cdot v_B$$
Well this has all been pretty theoretical with abstract vectors like $b$ and $v$ - let's make this concrete with real vectors and matrices to see it in action.
Imagine we had the matrix $F = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$. Since the goal of this post is not learning how to find eigenvectors, I'm just going to give you the eigenvectors for this matrix. They are:

$$\textcolor{blue}{b_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}} \quad \text{and} \quad \textcolor{#228B22}{b_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}}$$
The eigenbasis is just $B = \{b_1, b_2\}$.
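If you want to see where those vectors come from, NumPy will happily find them for you. This snippet is my own illustration; note that np.linalg.eig returns eigenvectors scaled to unit length, so its columns are scaled versions of $b_1$ and $b_2$ above.

```python
import numpy as np

F = np.array([[2, 1],
              [1, 2]])

# eigvals[i] is the eigenvalue for the eigenvector in column i of eigvecs.
eigvals, eigvecs = np.linalg.eig(F)
print(eigvals)   # [3. 1.]  (order may vary)
print(eigvecs)   # columns are unit-length multiples of b_1 = [1,1] and b_2 = [1,-1]

# Check the defining property F @ v = lambda * v for the first eigenvector.
v = eigvecs[:, 0]
print(F @ v, eigvals[0] * v)   # the two printed vectors match
```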
What is $F_{B}$, the matrix $F$ written in the eigenbasis $B$?
Since $F_B = \begin{bmatrix} f(\textcolor{blue}{b_1})_{B} & f(\textcolor{#228B22}{b_2})_{B} \end{bmatrix}$, we need to find:
$f(\textcolor{blue}{b_1})_{B}$ and $f(\textcolor{#228B22}{b_2})_{B}$
We'll break this down by first finding $f(\textcolor{blue}{b_1})$ and $f(\textcolor{#228B22}{b_2})$, and then rewriting them in the notation of the eigenbasis $B$ to get $f(\textcolor{blue}{b_1})_{B}$ and $f(\textcolor{#228B22}{b_2})_{B}$, as sketched below.
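Here's that calculation carried out in NumPy (my own sketch, assuming the eigenvectors $b_1 = [1, 1]$ and $b_2 = [1, -1]$ given above):

```python
import numpy as np

F = np.array([[2, 1],
              [1, 2]])

b1 = np.array([1, 1])
b2 = np.array([1, -1])
P = np.column_stack([b1, b2])   # change-of-basis matrix: eigenvectors as columns

# Apply f to each eigenvector...
print(F @ b1)   # [3 3]   = 3 * b1
print(F @ b2)   # [ 1 -1] = 1 * b2

# ...and rewrite the results in the eigenbasis B to get the columns of F_B.
f_b1_in_B = np.linalg.solve(P, F @ b1)
f_b2_in_B = np.linalg.solve(P, F @ b2)
print(np.column_stack([f_b1_in_B, f_b2_in_B]))
# [[3. 0.]
#  [0. 1.]]   -> diagonal, with the eigenvalues on the diagonal!
```

So in the eigenbasis, $F_B$ really is diagonal, and applying $f$ over and over reduces to taking powers of the diagonal entries.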
Eigenvectors also have extremely interesting geometric properties worth understanding. To see this, let's go back to the definition of an eigenvector of a linear map $f$ and its matrix $F$.
An eigenvector is a vector $v$ such that:
$$F \cdot v = \lambda v$$
How are $\lambda v$ and $v$ related? $\lambda v$ is just a scaling of $v$ in the same direction - it can't be rotated in any way.
Notice how $\lambda v$ is in the same direction as $v$. Image Source: Wikipedia.
In this sense, the eigenvectors of a linear map $f$ show us the axes along which the map simply scales or stretches its inputs.
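You can see this numerically too - below is a small sketch (mine, not the post's) comparing what $F$ does to an eigenvector versus an ordinary vector:

```python
import numpy as np

F = np.array([[2, 1],
              [1, 2]])

def angle(u, w):
    """Angle in degrees between two vectors."""
    cos = np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

v_eig = np.array([1, 1])     # an eigenvector of F
v_other = np.array([1, 0])   # not an eigenvector

print(angle(v_eig, F @ v_eig))      # 0.0    -> only stretched, direction unchanged
print(angle(v_other, F @ v_other))  # ~26.6  -> gets rotated as well as stretched
```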
The single best visualization I've seen of this is by 3Blue1Brown, who has a fantastic YouTube channel on visualizing math in general.
I'm embedding his video on eigenvectors and their visualizations below as it is the best geometric intuition out there:
Like we saw at the beginning of this post, eigenvectors are not just an abstract concept used by eccentric mathematicians in dark rooms - they underpin some of the most useful technology in our lives, including Google Search. For the brave, here's Larry Page and Sergey Brin's original paper on PageRank, the algorithm that makes it possible for us to type a few letters into a search box and instantly find every relevant website on the internet.
In the next post, we're going to actually dig through this paper and see how eigenvectors are applied in Google Search!
Stay tuned.
Appendix
Proof that $\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$ and $\textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}$ span $\mathbb{R}^2$:
We know already that $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ can be used to reach every coordinate.
We can create $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ by computing:

$$\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}} - \textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$$
Thus we can combine our vectors to obtain both $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$. By the first point above, this means every vector in $\mathbb{R}^2$ is reachable by combining $\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$ and $\textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}$.